Conversation
Datadog Report
Branch report: ❌ 1 Failed (0 Known Flaky), 171831 Passed, 1139 Skipped, 11h 16m 6.38s total duration (24m 50.11s time saved)
❌ Failed Tests (1)
Yun-Kim left a comment:
Looking great! Added some comments to help clear up some context, plus a few suggestions.
tests/snapshots/tests.contrib.anthropic.test_anthropic.test_anthropic_llm_sync.json (outdated; thread resolved)
```python
def record_usage(self, span: Span, usage: Dict[str, Any]) -> None:
    if not usage or not self.metrics_enabled:
        return
    for token_type in ("prompt", "completion"):
```
Suggested change:
```diff
-    for token_type in ("prompt", "completion"):
+    for token_type in ("input", "output"):
```
Why? I thought we were going with "prompt" and "completion" for the tag names?
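For readers following the thread, here is a minimal sketch of what `record_usage` could look like if the suggestion were taken. The `metrics_enabled` flag, the usage-dict keys, and the class stub are illustrative assumptions; the metric name pattern is borrowed from the `_get_llmobs_metrics_tags` snippet later in this thread.

```python
from typing import Any, Dict

from ddtrace import Span


class AnthropicIntegration:
    """Illustrative stand-in for the real integration class."""

    metrics_enabled = True  # assumed flag name, for illustration

    def record_usage(self, span: Span, usage: Dict[str, Any]) -> None:
        if not usage or not self.metrics_enabled:
            return
        total = 0
        for token_type in ("input", "output"):
            # Assumed usage-dict keys mirroring the suggested tag names.
            count = usage.get(token_type, 0)
            total += count
            # Metric name pattern taken from the review comment below.
            span.set_metric("anthropic.response.usage.%s_tokens" % token_type, count)
        span.set_metric("anthropic.response.usage.total_tokens", total)
```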
```python
self.record_usage(
    span,
    {"prompt": _get_attr(usage, "input_tokens", 0), "completion": getattr(usage, "output_tokens", 0)},
)
```
AnthropicIntegration.record_usage() should be used to tag span metrics for the input/output/total token counts. We should have a separate AnthropicIntegration._get_llmobs_metrics_tags() that returns the recorded span metric values, i.e.
```python
def _get_llmobs_metrics_tags(span):
    return {
        "input_tokens": span.get_metric("anthropic.response.usage.input_tokens"),
        "output_tokens": ...,
        "total_tokens": ...,
    }
```
and then set that on the span, i.e. span.set_tag_str(METRICS, json.dumps(self._get_llmobs_metrics_tags(span))).
And we set this tag in the LLMObs integration, correct?
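To make the suggestion concrete, here is a filled-in sketch of the helper plus the tag-setting step. The output/total metric names are extrapolated from the input_tokens pattern above, and the METRICS import path and the _set_llmobs_metrics_tag wrapper name are assumptions, not the library's confirmed API.

```python
import json
from typing import Dict, Optional

from ddtrace import Span
from ddtrace.llmobs._constants import METRICS  # assumed location of the tag key


def _get_llmobs_metrics_tags(span: Span) -> Dict[str, Optional[float]]:
    # Read back the token counts that record_usage() stored as span metrics.
    return {
        "input_tokens": span.get_metric("anthropic.response.usage.input_tokens"),
        "output_tokens": span.get_metric("anthropic.response.usage.output_tokens"),
        "total_tokens": span.get_metric("anthropic.response.usage.total_tokens"),
    }


def _set_llmobs_metrics_tag(span: Span) -> None:
    # Serialize the counts into the single JSON metrics tag, per the review comment.
    span.set_tag_str(METRICS, json.dumps(_get_llmobs_metrics_tags(span)))
```

Splitting the getter from the tagging step keeps record_usage() focused on APM span metrics while the LLMObs path consumes the same recorded values.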
Fixes #6055.
Checklist
- changelog/no-changelog is set
- @DataDog/apm-tees

Reviewer Checklist